Google Dethrones NVIDIA With Split Results In Latest Artificial Intelligence Benchmarking Tests


Digital transformation is generating artificial intelligence workloads at an unprecedented scale. These workloads require corporations to collect and store mountains of data. Even as business intelligence is extracted from current machine learning models, new data inflows are used to create new models and update existing ones. Building AI models is complex and expensive, and it differs greatly from traditional software development.


NVIDIA Crushes Latest Artificial Intelligence Benchmarking Tests


In its third round of submissions, MLCommons released results for MLPerf Inference v1.0. MLPerf is a suite of standard AI inference benchmarks spanning seven applications, covering workloads such as computer vision, medical imaging, recommender systems, speech recognition, and natural language processing. For each application and system form factor, MLPerf measures how fast a trained neural network can process data, allowing unbiased comparison between systems.